Sparse minimum Redundancy Maximum Relevance for feature selection

Naylor, Peter, Poignard, Benjamin, Climente-González, Héctor, Yamada, Makoto

arXiv.org Machine Learning

We propose a feature screening method that integrates both feature-feature and feature-target relationships. Inactive features are identified via a penalized minimum Redundancy Maximum Relevance (mRMR) procedure: a continuous relaxation of the classic mRMR criterion penalized by a non-convex regularizer, in which features whose estimated coefficients are zero are declared inactive. We establish the conditions under which zero coefficients are correctly identified, guaranteeing accurate recovery of the inactive set. We further introduce a multi-stage procedure based on the knockoff filter that enables the penalized mRMR to discard inactive features while controlling the false discovery rate (FDR). Our method performs comparably to HSIC-LASSO but is more conservative in the number of selected features, and it only requires setting an FDR threshold rather than specifying the number of features to retain. The effectiveness of the method is illustrated through simulations and real-world datasets. The code to reproduce this work is available on GitHub: https://github.com/PeterJackNaylor/SmRMR.
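The screening idea can be sketched in a few lines. This is an illustrative toy, not the paper's method: it uses absolute Pearson correlations as the relevance/redundancy measures and an L1 penalty with proximal gradient ascent, whereas the paper uses more general association measures and a non-convex regularizer. The function name `smrmr_sketch` and all parameter defaults are hypothetical; the shared principle is that features whose coefficients shrink to exactly zero are screened out as inactive.

```python
import numpy as np

def smrmr_sketch(X, y, lam=0.1, lr=0.01, n_iter=500):
    """Toy continuous mRMR screening (simplified sketch, not the paper's
    estimator): maximize relevance minus redundancy with an L1 penalty.
    Features whose coefficients end at zero are flagged as inactive."""
    n, d = X.shape
    Xs = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize features
    ys = (y - y.mean()) / y.std()               # standardize target
    r = np.abs(Xs.T @ ys) / n                   # relevance |corr(x_j, y)|
    Q = np.abs(Xs.T @ Xs) / n                   # redundancy |corr(x_j, x_k)|
    w = np.full(d, 1.0 / d)                     # uniform initial coefficients
    for _ in range(n_iter):
        grad = r - Q @ w                        # gradient of w'r - 0.5 w'Qw
        w = w + lr * grad                       # gradient ascent step
        w = np.maximum(w - lr * lam, 0.0)       # soft-threshold, keep w >= 0
    return w                                    # zeros = inactive features
```

On synthetic data where the target depends on only a couple of features, the irrelevant coefficients are driven to zero by the penalty while informative ones stay positive, mirroring the zero-coefficient screening described in the abstract (without the knockoff-based FDR control, which sits on top of this step).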



e5b294b70c9647dcf804d7baa1903918-AuthorFeedback.pdf

Neural Information Processing Systems

We appreciate the careful reviews! W TS(T, y): this could be added to Theorem 2. We don't have a suboptimality analysis for I[…]. We believe that our algorithms have other advantages over IDS. Moreover, IDS's computational complexity is […]. The IRS policy is recursive like TS: the decision at a given moment depends only on the posterior distribution and the […]. We absolutely agree that the stochastic MAB with independent arms has already been studied extensively.